59 results for Agent-based model
Abstract:
The channel-based model of duration perception postulates the existence of neural mechanisms that respond selectively to a narrow range of stimulus durations centred on their preferred duration (Heron et al., Proceedings of the Royal Society B, 279, 690–698). In principle, the channel-based model could explain recent reports of adaptation-induced visual duration compression effects (Johnston et al., Current Biology, 16, 472–479; Curran and Benton, Cognition, 122, 252–257); from this perspective, duration compression is a consequence of the adapting stimuli being presented for a longer duration than the test stimuli. In the current experiment, observers adapted to a sequence of moving random dot patterns at the same retinal position, each 340 ms in duration and separated by a variable (500–1000 ms) interval. Following adaptation, observers judged the duration of a 600 ms test stimulus at the same location. The test stimulus moved in the same, or opposite, direction as the adaptor. Contrary to the channel-based
model’s prediction, test stimulus duration appeared compressed, rather than expanded, when it moved in the same direction as the adaptor. That test stimulus duration was not distorted when moving in the opposite direction further suggests that visual timing mechanisms are influenced by additional neural processing associated with the stimulus being timed.
Abstract:
In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system that focuses on the interactions of autonomous agents and aims to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area; A* is an extension of Dijkstra's algorithm with improved time performance through the use of heuristics (see the sketch below). In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations on travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that realistic data on human movements can be produced without the costly measurement commonly required by, for example, GPS or retrospective or real-time diaries. The approach is transferable between geographical locations, and the product provides insight into human movement patterns. It therefore has use in many human exposure-related applications, particularly epidemiological research in rural areas, where spatial heterogeneity in the disease landscape and the space-time proximity of individuals can play a crucial role in disease spread.
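As a rough illustration of the A* step (not the authors' implementation), the minimal Python sketch below searches a 2D grid of per-cell terrain traversal costs with a straight-line heuristic; the grid representation, cost semantics and heuristic choice are assumptions made here for brevity.

```python
import heapq
import math

def a_star(grid_cost, start, goal):
    """Minimal A* route finder on a 2D grid of per-cell traversal costs.

    grid_cost[r][c] is the cost of stepping onto cell (r, c); None marks
    impassable terrain. Assumes step costs >= 1 so the straight-line
    heuristic stays admissible. start and goal are (row, col) tuples.
    """
    def h(node):
        return math.hypot(node[0] - goal[0], node[1] - goal[1])

    rows, cols = len(grid_cost), len(grid_cost[0])
    g_score = {start: 0.0}
    parent = {start: None}
    open_set = [(h(start), start)]
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                      # reconstruct the walking route
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (node[0] + dr, node[1] + dc)
            if not (0 <= nbr[0] < rows and 0 <= nbr[1] < cols):
                continue
            step = grid_cost[nbr[0]][nbr[1]]
            if step is None:
                continue
            tentative = g_score[node] + step
            if tentative < g_score.get(nbr, math.inf):
                g_score[nbr] = tentative
                parent[nbr] = node
                heapq.heappush(open_set, (tentative + h(nbr), nbr))
    return None                               # goal unreachable
```

On a terrain-cost grid derived from land-cover data, a call such as a_star(grid, home_cell, water_cell) would return the cell sequence of a least-cost walking route between a home and a water resource.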
Abstract:
There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
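For readers unfamiliar with DS theory, the following minimal Python sketch shows Dempster's original rule of combination, which the context-dependent strategy falls back on where appropriate; the two-sensor example over the frame {ok, fault} is invented here and is not the paper's implementation.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Returns the normalised combined mass function, or None under total conflict.
    """
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2           # mass assigned to the empty set
    if conflict >= 1.0:
        return None                           # rule undefined under total conflict
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Invented example: two sensors reporting on a power line's state {ok, fault}
m_a = {frozenset({"ok"}): 0.7, frozenset({"ok", "fault"}): 0.3}
m_b = {frozenset({"fault"}): 0.4, frozenset({"ok", "fault"}): 0.6}
print(dempster_combine(m_a, m_b))
```

The high conflict mass (0.28 in this toy example) is the kind of situation in which the paper's LPMCS-based strategy would choose an alternative to Dempster's rule rather than simply renormalising.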
Abstract:
The TELL ME agent-based model simulates the connections between health agency communication, personal decisions to adopt protective behaviour during an influenza epidemic, and the effect of those decisions on epidemic progress. The behaviour decisions are modelled with a combination of personal attitude, behaviour adoption by neighbours, and the local recent incidence of influenza. This paper sets out and justifies the model design, including how these decision factors have been operationalised. By exploring the effects of different communication strategies, the model is intended to assist health authorities with their influenza epidemic communication plans. It can both assist users to understand the complex interactions between communication, personal behaviour and epidemic progress, and guide future data collection to improve communication planning.
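The paper itself sets out the actual operationalisation; purely as a hypothetical illustration of how the three decision factors might be combined in an agent's rule, consider the sketch below, in which the function name, weights, threshold and [0, 1] scaling are all invented.

```python
def adopts_protection(attitude, neighbour_adoption, local_incidence,
                      w_attitude=0.4, w_social=0.3, w_risk=0.3, threshold=0.5):
    """Hypothetical sketch of a protective-behaviour adoption decision.

    attitude: the agent's personal attitude towards protective behaviour,
    neighbour_adoption: fraction of neighbours already adopting,
    local_incidence: recent local influenza incidence, all scaled to [0, 1].
    The weights and threshold are illustrative assumptions, not the values
    used in the published TELL ME model.
    """
    perceived = (w_attitude * attitude
                 + w_social * neighbour_adoption
                 + w_risk * local_incidence)
    return perceived >= threshold
```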
Abstract:
Traditional experimental economics methods often consume enormous resources of qualified human participants, and the inconsistence of a participant’s decisions among repeated trials prevents investigation from sensitivity analyses. The problem can be solved if computer agents are capable of generating similar behaviors as the given participants in experiments. An experimental economics based analysis method is presented to extract deep information from questionnaire data and emulate any number of participants. Taking the customers’ willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Multi-agents mimicking the inquired potential customers are modelled through matching the probabilistic distributions of their willingness embedded in the questionnaires. The authenticity of both the model and the algorithm is validated by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of agent models, the effects of minority agents with specific preferences on the results are also discussed.
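As a hypothetical illustration of the agent-generation step, the sketch below samples synthetic respondents from a questionnaire-derived willingness distribution for use in a Monte Carlo simulation; the category labels, shares, agent count and function name are invented and are not taken from the study.

```python
import random

def make_agents(willingness_dist, n_agents, seed=0):
    """Sample synthetic respondents from a questionnaire-derived distribution.

    willingness_dist: dict mapping a willingness-to-purchase level to its
    observed share among questionnaire respondents (shares sum to 1).
    Returns a list of n_agents sampled willingness levels.
    """
    rng = random.Random(seed)
    levels = list(willingness_dist)
    weights = [willingness_dist[lvl] for lvl in levels]
    return [rng.choices(levels, weights)[0] for _ in range(n_agents)]

# Invented example: emulate 10,000 potential EV customers from survey shares
dist = {"unwilling": 0.45, "undecided": 0.30, "willing": 0.25}
agents = make_agents(dist, 10_000)
share_willing = agents.count("willing") / len(agents)
```

Repeating such draws many times gives the Monte Carlo spread against which questionnaire-based deductions can be compared, and minority preferences can be injected by altering the shares for a subset of agents.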
Abstract:
Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of the objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history to preserve the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be directly used for downstream applications, including manufacturing and process planning.
This paper presents an approach for optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the “Parametric Design Velocity” is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advance in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous (“real-valued”) parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation involves calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometry respectively (a sketch of this step follows below). Parametric design velocities can then be directly linked with the adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
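As an illustrative sketch of the finite-difference step (not the paper's implementation), the function below estimates a design velocity field from point samples of the original and perturbed surfaces; the nearest-point projection, the array-based interface and the variable names are simplifying assumptions made here.

```python
import numpy as np

def design_velocity(points0, normals0, points1, delta_p):
    """Finite-difference design velocity at each sampled surface point.

    points0, normals0: (n, 3) arrays sampling the original CAD surface
    (points and outward unit normals); points1: (m, 3) array sampling the
    perturbed surface; delta_p: size of the CAD parameter perturbation.
    Returns the normal movement per unit parameter change at each point.
    """
    velocities = np.empty(len(points0))
    for i, (p, n) in enumerate(zip(points0, normals0)):
        # crude nearest-point pairing between the two discrete surfaces
        nearest = points1[np.argmin(np.linalg.norm(points1 - p, axis=1))]
        # boundary movement resolved along the original surface normal
        velocities[i] = np.dot(nearest - p, n) / delta_p
    return velocities
```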
A flow optimisation problem is presented in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
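To make the chain rule behind this combination explicit, a minimal sketch of the gradient assembly is shown below; the facet-wise quadrature and the argument names are assumptions made here for illustration.

```python
import numpy as np

def cad_parameter_gradient(surface_sensitivity, design_velocity, face_area):
    """Chain-rule assembly of dJ/d(parameter) over a discretised surface.

    surface_sensitivity: adjoint dJ/dx_n per surface facet (normal movement),
    design_velocity: dx_n/dp per facet for one CAD parameter,
    face_area: facet areas used as quadrature weights; all (n,) arrays.
    The discrete surface integral below is a simplifying assumption.
    """
    return np.sum(surface_sensitivity * design_velocity * face_area)
```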
Abstract:
This paper presents a novel approach to epipolar geometry estimation based on the use of evolutionary agents. In contrast to conventional nonlinear optimization methods, the proposed technique employs each agent to denote a minimal subset of correspondences from which the fundamental matrix is computed, and treats the set of correspondences as a 1D cellular environment which the agents inhabit and in which they evolve. The agents execute evolutionary behaviour and evolve autonomously in a vast solution space to reach the optimal (or near-optimal) result. Three techniques are then proposed to improve the searching ability and computational efficiency of the original agents. The subset template enables agents to collaborate more efficiently with each other and to inherit accurate information from the whole agent set. The competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) apply better evolutionary strategies or decision rules and focus on different aspects of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches perform better than other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
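The evolutionary search itself is not reproduced here, but the sketch below shows how one agent's subset of correspondences could be mapped to a candidate fundamental matrix with the standard linear (eight-point) method and scored with the Sampson distance; point normalisation, the agents' evolutionary operators and the exact minimal subset size are omitted or assumed, so this is an illustration rather than the authors' code.

```python
import numpy as np

def fundamental_from_subset(x1, x2):
    """Linear (eight-point) estimate of F from one agent's subset.

    x1, x2: (8, 2) arrays of corresponding points in the two images.
    """
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    u, s, vt = np.linalg.svd(F)               # enforce the rank-2 constraint
    return u @ np.diag([s[0], s[1], 0.0]) @ vt

def sampson_error(F, x1, x2):
    """Sampson distance of all correspondences to F (one possible fitness)."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fx1 = (F @ p1.T).T
    Ftx2 = (F.T @ p2.T).T
    num = np.sum(p2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den
```

An agent's fitness could, for instance, be the number of correspondences whose Sampson error falls below a threshold, which is what the evolutionary strategies would then optimise.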
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used, owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex-optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
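As a concrete example of a cross-validation-based selective criterion for a linear-in-the-parameter model, the sketch below uses the leave-one-out PRESS statistic inside a greedy forward-selection loop; the function names and this particular combination are illustrative choices made here, not a reproduction of any specific algorithm reviewed in the article.

```python
import numpy as np

def loo_press(Phi, y):
    """Leave-one-out (PRESS) error for a linear-in-the-parameters model y ~ Phi w."""
    H = Phi @ np.linalg.pinv(Phi)            # hat matrix of the least-squares fit
    residuals = y - H @ y
    return np.mean((residuals / (1.0 - np.diag(H))) ** 2)

def forward_select(candidates, y, max_terms):
    """Greedily add candidate basis columns while the PRESS error improves.

    candidates: (n_samples, n_candidates) matrix of candidate regressors.
    Returns the indices of the selected columns.
    """
    chosen, best_hist = [], []
    remaining = list(range(candidates.shape[1]))
    for _ in range(max_terms):
        scores = [(loo_press(candidates[:, chosen + [j]], y), j) for j in remaining]
        score, j = min(scores)
        if best_hist and score >= best_hist[-1]:
            break                            # generalisation stops improving
        chosen.append(j)
        remaining.remove(j)
        best_hist.append(score)
    return chosen
```

Stopping when the leave-one-out error no longer decreases is one simple way of pursuing the minimal model with good generalisation that the review highlights.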
Abstract:
In a recently published study, Sloutsky and Fisher [Sloutsky, V. M., & Fisher, A. V. (2004a). When development and learning decrease memory: Evidence against category-based induction in children. Psychological Science, 15, 553–558; Sloutsky, V. M., & Fisher, A. V. (2004b). Induction and categorization in young children: A similarity-based model. Journal of Experimental Psychology: General, 133, 166–188.] demonstrated that children have better memory for the items that they generalise to than do adults. On the basis of this finding, they claim that children and adults use different mechanisms for inductive generalisation; whereas adults focus on shared category membership, children project properties on the basis of perceptual similarity. Sloutsky & Fisher attribute children's enhanced recognition memory to the more detailed processing required by this similarity-based mechanism. In Experiment 1 we show that children look at the stimulus items for longer than adults. In Experiment 2 we demonstrate that although, when given just 250 ms to inspect the items, children remain capable of making accurate inferences, their subsequent memory for those items decreases significantly. These findings suggest that no conclusions about developmental differences in generalisation strategy necessarily follow from Sloutsky & Fisher's results.
Abstract:
Homology modeling was used to build 3D models of the N-methyl-D-aspartate (NMDA) receptor glycine binding site on the basis of an X-ray structure of the water-soluble AMPA-sensitive receptor. The docking of agonists and antagonists to these models was used to reveal binding modes of ligands and to explain known structure-activity relationships. Two types of quantitative models, 3D-QSAR/CoMFA and a regression model based on docking energies, were built for antagonists (derivatives of 4-hydroxy-2-quinolone, quinoxaline-2,3-dione, and related compounds). The CoMFA steric and electrostatic maps were superimposed on the homology-based model, and a close correspondence was observed. The derived computational models have permitted the evaluation of the structural features crucial for high glycine binding site affinity and are important for the design of new ligands.
Abstract:
The agent-based social simulation component of the TELL ME project (WP4) developed prototype software to assist communications planners to understand the complex relationships between communication, personal protective behaviour and epidemic spread. Using the simulation, planners can enter different potential communications plans, and see their simulated effect on attitudes, behaviour and the consequent effect on an influenza epidemic.
The model and the software to run the model are both freely available (see section 2.2.1 for instructions on how to obtain the relevant files). This report provides the documentation for the prototype software. The major component is the user guide (Section 2). This provides instructions on how to set up the software, some training scenarios to become familiar with the model operation and use, and details about the model controls and output.
The model contains many parameters. Default values and their sources are described in Section 3. These are unlikely to be suitable for all countries, and may also need to be changed as new research is conducted. Instructions for how to customise these values are also included (see section 3.5).
The final technical reference contains two parts. The first is a guide for advanced users who wish to run multiple simulations and analyse the results (section 4.1). The second orients programmers who wish to adapt or extend the simulation model (section 4.2). This material is not suitable for general users.
Abstract:
Title: The £ for lb. Challenge – a lose–win–win scenario. Results from a novel workplace-based, peer-led weight management programme in 2016.
Names: Damien Bennett, Declan Bradley, Angela McComb, Amy Kiernan, Tracey Owen
Background: Tackling obesity is a public health priority. The £ for lb. Challenge is the first countrywide, workplace-based, peer-led weight management programme in the UK or Ireland, with participants from a range of private and public businesses in Northern Ireland (NI).
Intervention: The intervention was workplace-based, led by workplace Champions and based on the NHS Choices 12-week weight loss guide. It operated from January to April 2016. Overweight and obese adult workers were eligible. Training of Peer Champions (staff volunteers) involved two half-day workshops delivered by dieticians and physical activity professionals.
Outcome measurement: Weight was measured at enrolment and 12 weekly intervals. Changes in weight, % weight, BMI and % BMI were determined for the whole cohort and sex and deprivation subgroups.
Results: There were 1513 eligible participants from 35 companies. The engagement rate was 98%, and 75% of participants completed the programme. Mean weight loss was 2.4 kg or 2.7%. Almost a quarter (24%) lost at least 5% of their initial body weight. Male participants were over twice as likely to complete the programme and three times more likely to lose 5% body weight or more. Over £17,000 was raised for NI charities.
Discussion: The £ for lb. Challenge is a successful health improvement programme with important weight loss for many participants, particularly male workers. With high levels of user engagement and ownership, and successful multidisciplinary collaboration between public health, voluntary bodies, and private and public companies, it is a novel workplace-based model with potential to expand.
Abstract:
PEGS (Production and Environmental Generic Scheduler) is a generic production scheduler that produces good schedules over a wide range of problems. It is centralised, using search strategies with the Shifting Bottleneck algorithm. We have also developed an alternative distributed approach using software agents. In some cases this reduces run times by a factor of 10 or more. In most cases, the agent-based program also produces good solutions for published benchmark data, and the short run times make our program useful for a large range of problems. Test results show that the agents can produce schedules comparable to the best found so far for some benchmark datasets and actually better schedules than PEGS on our own random datasets. The flexibility that agents can provide for today's dynamic scheduling is also appealing. We suggest that in this sort of generic or commercial system, the agent-based approach is a good alternative.
Abstract:
The mechanism whereby the foundation loading is transmitted through the stone column (included in soft clay) has received relatively little attention from researchers. This paper reports some interesting findings obtained from a laboratory-based model study of this issue. The stone column, included in the soft clay bed, was subjected to foundation loading under drained conditions. The results show, probably for the first time, how the foundation loadings are transmitted through the column and, indeed, the existence of “negative skin friction” (a widely accepted phenomenon in solid piles) in granular columns in soft clays.
Abstract:
While the incorporation of mathematical and engineering methods has greatly advanced in other areas of the life sciences, they have been under-utilized in the field of animal welfare. Exceptions are beginning to emerge, sharing a common motivation: to quantify 'hidden' aspects of the structure of the behaviour of an individual animal or group of animals. Such analyses have the potential to quantify behavioural markers of pain and stress and to quantify abnormal behaviour objectively. This review seeks to explore the scope of such analytical methods as behavioural indicators of welfare. We outline four classes of analyses that can be used to quantify aspects of behavioural organization. The underlying principles, possible applications and limitations are described for: fractal analysis, temporal methods, social network analysis, and agent-based modelling and simulation. We hope to encourage further application of analyses of behavioural organization by highlighting potential applications in the assessment of animal welfare, and by increasing awareness of the scope for the development of new mathematical methods in this area.
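As a toy illustration of the agent-based modelling and simulation class of methods (not drawn from the review itself), the sketch below simulates a small group of animals taking random steps with a weak pull towards the group centroid, and records one simple structural measure, the group's mean dispersion over time; all parameter values and the behavioural rule are arbitrary illustrative choices.

```python
import random

def simulate(n_animals=20, steps=200, attraction=0.05, seed=1):
    """Toy agent-based simulation of group spacing (illustrative only).

    Each animal takes a small Gaussian random step plus a weak pull towards
    the group centroid. Returns the mean distance to the centroid at each
    time step, a simple 'hidden' measure of group behavioural organization.
    """
    rng = random.Random(seed)
    pos = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_animals)]
    dispersion = []
    for _ in range(steps):
        cx = sum(x for x, _ in pos) / n_animals
        cy = sum(y for _, y in pos) / n_animals
        pos = [(x + rng.gauss(0, 0.1) + attraction * (cx - x),
                y + rng.gauss(0, 0.1) + attraction * (cy - y)) for x, y in pos]
        dispersion.append(sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                              for x, y in pos) / n_animals)
    return dispersion
```

Comparing such simulated dispersion curves with observed group behaviour is one way models of this kind can be used to probe welfare-relevant changes in behavioural organization.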