883 results for Agent-based systems
Abstract:
In order to gain knowledge from large databases, scalable data mining technologies are needed. Data are captured on an ever larger scale, so databases grow at a fast pace, which has led to the use of parallel computing to cope with the volume of data. In classification rule induction, parallelisation has focused on the divide-and-conquer approach, also known as Top Down Induction of Decision Trees (TDIDT). An alternative approach, separate and conquer, has only recently become a focus of parallelisation efforts. This work introduces and empirically evaluates a framework for the parallel induction of classification rules generated by members of the Prism family of algorithms, all of which follow the separate-and-conquer approach.
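The separate-and-conquer strategy that the Prism family follows can be sketched in a few lines. This is a hypothetical toy illustration (the dict-based data layout, the greedy attribute-value test and the tie-breaking are assumptions for the sketch, not the paper's parallel framework): learn one rule for the target class, remove ("separate") the examples it covers, and repeat ("conquer") on the remainder.

```python
def separate_and_conquer(examples, target_class):
    """Toy separate-and-conquer rule induction. Each example is a dict of
    attribute -> value plus a 'class' key. A rule is a conjunction of
    attribute == value tests, specialised greedily until it covers only
    examples of target_class."""
    rules = []
    remaining = list(examples)
    while any(e['class'] == target_class for e in remaining):
        rule = {}
        covered = remaining
        # Specialise the rule until no counter-examples remain covered.
        while any(e['class'] != target_class for e in covered):
            best = None
            for e in covered:
                for attr, val in e.items():
                    if attr == 'class' or attr in rule:
                        continue
                    subset = [c for c in covered if c.get(attr) == val]
                    pos = sum(1 for c in subset if c['class'] == target_class)
                    if subset and (best is None or pos / len(subset) > best[0]):
                        best = (pos / len(subset), attr, val)
            if best is None:
                break  # no attribute left to specialise on
            rule[best[1]] = best[2]
            covered = [c for c in covered if c.get(best[1]) == best[2]]
        rules.append(rule)
        # Separate: drop every example the finished rule covers.
        remaining = [e for e in remaining
                     if not all(e.get(a) == v for a, v in rule.items())]
    return rules
```

Each pass produces one rule and shrinks the training set, which is also why the approach parallelises differently from TDIDT: the expensive step is the repeated scan over the remaining examples.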
Abstract:
The Earth’s climate, as well as planetary climates in general, is broadly regulated by three fundamental parameters: the total solar irradiance, the planetary albedo and the planetary emissivity. Observations from series of different satellites during the last three decades indicate that these three quantities are generally very stable. The total solar irradiance of some 1,361 W/m2 at 1 A.U. varies within 1 W/m2 during the 11-year solar cycle (Fröhlich 2012). The albedo is close to 29 % with minute changes from year to year but with marked zonal differences (Stevens and Schwartz 2012). The only exception to the overall stability is a minor decrease in the planetary emissivity (the ratio between the radiation to space and the radiation from the surface of the Earth). This is a consequence of the increase in atmospheric greenhouse gas amounts making the atmosphere gradually more opaque to long-wave terrestrial radiation. As a consequence, radiation processes are slightly out of balance as less heat is leaving the Earth in the form of thermal radiation than the amount of heat from the incoming solar radiation. Present space-based systems cannot yet measure this imbalance, but the effect can be inferred from the increase in heat in the oceans where most of the heat accumulates. Minor amounts of heat are used to melt ice and to warm the atmosphere and the surface of the Earth.
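The quoted figures support a quick worked energy-balance check. This is a minimal sketch using textbook radiative-balance reasoning (the factor of 4 and the Stefan-Boltzmann step are standard background, not taken from the abstract):

```python
# Back-of-the-envelope global energy budget using the figures quoted above.
TSI = 1361.0          # total solar irradiance at 1 AU, W/m^2
ALBEDO = 0.29         # planetary albedo

# Sunlight is intercepted over a disc (pi R^2) but spread over a sphere
# (4 pi R^2), hence the division by 4.
absorbed = TSI / 4.0 * (1.0 - ALBEDO)   # mean absorbed flux, ~242 W/m^2

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
# Effective emission temperature that would balance the absorbed flux:
T_eff = (absorbed / SIGMA) ** 0.25      # ~255 K
```

The described imbalance means the outgoing term currently falls slightly short of `absorbed`, with the difference accumulating mostly as ocean heat.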
Abstract:
Radar refractivity retrievals can capture near-surface humidity changes, but noisy phase changes of the ground clutter returns limit the accuracy for both klystron- and magnetron-based systems. Observations with a C-band (5.6 cm) magnetron weather radar indicate that the correction for phase changes introduced by local oscillator frequency changes leads to refractivity errors no larger than 0.25 N units: equivalent to a relative humidity change of only 0.25% at 20°C. Requested stable local oscillator (STALO) frequency changes were accurate to 0.002 ppm based on laboratory measurements. More serious are the random phase change errors introduced when targets are not at the range-gate center and there are changes in the transmitter frequency (ΔfTx) or the refractivity (ΔN). Observations at C band with a 2-μs pulse show an additional 66° of phase change noise for a ΔfTx of 190 kHz (34 ppm); this allows the effect due to ΔN to be predicted. Even at S band with klystron transmitters, significant phase change noise should occur when a large ΔN develops relative to the reference period [e.g., ~55° when ΔN = 60 for the Next Generation Weather Radar (NEXRAD) radars]. At shorter wavelengths (e.g., C and X band) and with magnetron transmitters in particular, refractivity retrievals relative to an earlier reference period are even more difficult, and operational retrievals may be restricted to changes over shorter (e.g., hourly) periods of time. Target location errors can be reduced by using a shorter pulse or identified by a new technique making alternate measurements at two closely spaced frequencies, which could even be achieved with a dual–pulse repetition frequency (PRF) operation of a magnetron transmitter.
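The sensitivity of target phase to refractivity that makes these retrievals possible (and so noisy) follows from the standard two-way propagation relation used in radar refractivity work. The sketch below assumes that relation and a 5.35 GHz carrier (roughly the 5.6 cm wavelength quoted); it is an illustration, not the authors' processing code.

```python
import math

def phase_change_deg(delta_N, freq_hz, range_m):
    """Two-way phase change (degrees) at a fixed ground target caused by a
    refractivity change delta_N (N units) along a path of length range_m,
    via delta_phi = (4 * pi * f * r / c) * delta_N * 1e-6."""
    c = 2.998e8  # speed of light, m/s
    dphi_rad = 4.0 * math.pi * freq_hz * range_m / c * delta_N * 1e-6
    return math.degrees(dphi_rad)
```

Even a single N unit over one kilometre at C band produces a phase shift of order ten degrees, which is why phase wraps rapidly with range and why small frequency or target-location errors matter so much.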
Abstract:
Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These predictions are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
Abstract:
Spatio-temporal landscape heterogeneity has rarely been considered in population-level impact assessments. Here we test whether landscape heterogeneity is important by examining the case of a pesticide applied seasonally to orchards which may affect non-target vole populations, using a validated ecologically realistic and spatially explicit agent-based model. Voles thrive in unmanaged grasslands and untreated orchards but are particularly exposed to applied pesticide treatments during dispersal between optimal habitats. We therefore hypothesised that vole populations do better (1) in landscapes containing more grassland and (2) where areas of grassland are closer to orchards, but (3) do worse if larger areas of orchards are treated with pesticide. To test these hypotheses we made appropriate manipulations to a model landscape occupied by field voles. Pesticide application reduced model population sizes in all three experiments, but populations subsequently wholly or partly recovered. Population depressions were, as predicted, lower in landscapes containing more unmanaged grassland, in landscapes with reduced distance between grassland and orchards, and in landscapes with fewer treated orchards. Population recovery followed a similar pattern except for an unexpected improvement in recovery when the area of treated orchards was increased. Outside the period of pesticide application, orchards increase landscape connectivity and facilitate vole dispersal and so speed population recovery. Overall our results show that accurate prediction of population impact cannot be achieved without taking account of landscape structure. The specifics of landscape structure and habitat connectivity are likely always important in mediating the effects of stressors.
Abstract:
The four Cluster spacecraft offer a unique opportunity to study structure and dynamics in the magnetosphere and we discuss four general ways in which ground-based remote-sensing observations of the ionosphere can be used to support the in-situ measurements. The ionosphere over the Svalbard islands will be studied in particular detail, not only by the ESR and EISCAT incoherent scatter radars, but also by optical instruments, magnetometers, imaging riometers and the CUTLASS bistatic HF radar. We present an on-line procedure to plan coordinated measurements by the Cluster spacecraft with these combined ground-based systems. We illustrate the philosophy of the method, using two important examples of the many possible configurations between the Cluster satellites and the ground-based instruments.
Abstract:
More and more households are purchasing electric vehicles (EVs), and this will continue as we move towards a low carbon future. There are various projections as to the rate of EV uptake, but all predict an increase over the next ten years. Charging these EVs will produce one of the biggest loads on the low voltage network. To manage the network, we must not only take into account the number of EVs taken up, but where on the network they are charging, and at what time. To simulate the impact on the network from high, medium and low EV uptake (as outlined by the UK government), we present an agent-based model. We initialise the model to assign an EV to a household based on either random distribution or social influences - that is, a neighbour of an EV owner is more likely to also purchase an EV. Additionally, we examine the effect of peak behaviour on the network when charging is at day-time, night-time, or a mix of both. The model is implemented on a neighbourhood in south-east England using smart meter data (half hourly electricity readings) and real life charging patterns from an EV trial. Our results indicate that social influence can increase the peak demand on a local level (street or feeder), meaning that medium EV uptake can create higher peak demand than currently expected.
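The two seeding schemes described (random versus socially influenced EV assignment) can be caricatured in a few lines. The line-of-households geometry, the `neighbour_boost` parameter and the weighting scheme below are illustrative assumptions, not details taken from the model.

```python
import random

def assign_evs(n_households, n_evs, neighbour_boost=3.0, seed=1):
    """Assign n_evs EVs among households 0..n_households-1. Under social
    influence, a household adjacent to an existing EV owner is
    neighbour_boost times as likely to be picked as an isolated one.
    Setting neighbour_boost=1.0 recovers purely random assignment.
    Assumes n_evs < n_households."""
    random.seed(seed)
    owners = set()
    while len(owners) < n_evs:
        weights = []
        for h in range(n_households):
            if h in owners:
                weights.append(0.0)          # already owns an EV
            elif (h - 1) in owners or (h + 1) in owners:
                weights.append(neighbour_boost)
            else:
                weights.append(1.0)
        owners.add(random.choices(range(n_households), weights=weights)[0])
    return owners
```

Because boosted picks cluster adopters on the same street, the same total uptake concentrates charging load on fewer feeders, which is the mechanism behind the higher local peaks reported.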
Abstract:
Pasture-based ruminant production systems are common in certain areas of the world, but energy evaluation in grazing cattle is performed with equations developed, in their majority, with sheep or cattle fed total mixed rations. The aim of the current study was to develop predictions of metabolisable energy (ME) concentrations in fresh-cut grass offered to non-pregnant non-lactating cows at maintenance energy level, which may be more suitable for grazing cattle. Data were collected from three digestibility trials performed over consecutive grazing seasons. In order to cover a range of commercial conditions and data availability in pasture-based systems, thirty-eight equations for the prediction of energy concentrations and ratios were developed. An internal validation was performed for all equations and also for existing predictions of grass ME. Prediction error for ME using nutrient digestibility was lowest when gross energy (GE) or organic matter digestibilities were used as sole predictors, while the addition of grass nutrient contents reduced the difference between predicted and actual values, and explained more variation. Addition of N, GE and diethyl ether extract (EE) contents improved accuracy when digestible organic matter in DM was the primary predictor. When digestible energy was the primary explanatory variable, prediction error was relatively low, but addition of water-soluble carbohydrates, EE and acid-detergent fibre contents of grass decreased prediction error. Equations developed in the current study showed lower prediction errors when compared with those of existing equations, and may thus allow for an improved prediction of ME in practice, which is critical for the sustainability of pasture-based systems.
Abstract:
Improved nutrient utilization efficiency is strongly related to enhanced economic performance and reduced environmental footprint of dairy farms. Pasture-based systems are widely used for dairy production in certain areas of the world, but prediction equations of fresh grass nutritive value (nutrient digestibility and energy concentrations) are limited. Equations to predict digestible energy (DE) and metabolizable energy (ME) used for grazing cattle have been either developed with cattle fed conserved forage and concentrate diets or sheep fed previously frozen grass, and the majority of them require measurements less commonly available to producers, such as nutrient digestibility. The aim of the present study was therefore to develop prediction equations more suitable to grazing cattle for nutrient digestibility and energy concentrations, which are routinely available at farm level by using grass nutrient contents as predictors. A study with 33 nonpregnant, nonlactating cows fed solely fresh-cut grass at maintenance energy level for 50 wk was carried out over 3 consecutive grazing seasons. Freshly harvested grass of 3 cuts (primary growth and first and second regrowth), 9 fertilizer input levels, and contrasting stage of maturity (3 to 9 wk after harvest) was used, thus ensuring a wide representation of nutritional quality. As a result, a large variation existed in digestibility of dry matter (0.642-0.900) and digestible organic matter in dry matter (0.636-0.851) and in concentrations of DE (11.8-16.7 MJ/kg of dry matter) and ME (9.0-14.1 MJ/kg of dry matter). Nutrient digestibilities and DE and ME concentrations were negatively related to grass neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents but positively related to nitrogen (N), gross energy, and ether extract (EE) contents. 
For each predicted variable (nutrient digestibilities or energy concentrations), different combinations of predictors (grass chemical composition) were found to be significant and increase the explained variation. For example, relatively higher R² values were found for prediction of N digestibility using N and EE as predictors; gross-energy digestibility using EE, NDF, ADF, and ash; NDF, ADF, and organic matter digestibilities using N, water-soluble carbohydrates, EE, and NDF; digestible organic matter in dry matter using water-soluble carbohydrates, EE, NDF, and ADF; DE concentration using gross energy, EE, NDF, ADF, and ash; and ME concentration using N, EE, ADF, and ash. Equations presented may allow a relatively quick and easy prediction of grass quality and, hence, better grazing utilization on commercial and research farms, where nutrient composition falls within the range assessed in the current study.
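As an illustration of how such prediction equations are fitted, the sketch below regresses a synthetic ME variable on made-up nutrient contents by ordinary least squares. None of the numbers are the study's data or coefficients; the predictor set (N, EE, ADF, ash) simply mirrors the ME equation named above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Hypothetical grass nutrient contents, g/kg DM (ranges invented):
N_g = rng.uniform(15, 40, n)      # nitrogen
EE  = rng.uniform(20, 45, n)      # ether extract
ADF = rng.uniform(180, 320, n)    # acid detergent fibre
ash = rng.uniform(60, 110, n)

# Synthetic "true" ME (MJ/kg DM) plus noise, purely for demonstration:
ME = (14.0 + 0.02 * N_g + 0.01 * EE - 0.012 * ADF - 0.008 * ash
      + rng.normal(0, 0.1, n))

# Ordinary least squares with an intercept column:
X = np.column_stack([np.ones(n), N_g, EE, ADF, ash])
coef, *_ = np.linalg.lstsq(X, ME, rcond=None)
predicted = X @ coef
```

The negative fitted ADF coefficient and positive N and EE coefficients reproduce the direction of the relationships the abstract reports (fibre down, nitrogen and ether extract up).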
Abstract:
We formulate an agent-based population model of Escherichia coli cells which incorporates a description of the chemotaxis signalling cascade at the single cell scale. The model is used to gain insight into the link between the signalling cascade dynamics and the overall population response to differing chemoattractant gradients. Firstly, we consider how the observed variation in total (phosphorylated and unphosphorylated) signalling protein concentration affects the ability of cells to accumulate in differing chemoattractant gradients. Results reveal that a variation in total cell protein concentration between cells may be a mechanism for the survival of cell colonies across a wide range of differing environments. We then study the response of cells in the presence of two different chemoattractants. In doing so we demonstrate that the population scale response depends not on the absolute concentration of each chemoattractant but on the sensitivity of the chemoreceptors to their respective concentrations. Our results show the clear link between single cell features and the overall environment in which cells reside.
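At its simplest, the link between cascade output and population drift is a biased run-and-tumble process: the cascade lowers the tumble rate when the cell moves up the gradient. The one-dimensional caricature below is illustrative only (the tumble probabilities, gradient and step size are invented, and a real tumble reorients randomly rather than reversing); it is not the paper's cascade model.

```python
import random

def run_and_tumble(steps=2000, sensitivity=5.0, seed=0):
    """Minimal 1-D run-and-tumble agent in a gradient c(x) = x: tumbling is
    'sensitivity' times more likely when moving down-gradient, so the cell
    drifts towards higher chemoattractant concentration."""
    random.seed(seed)
    x, direction = 0.0, 1
    for _ in range(steps):
        moving_up = direction > 0            # gradient of c(x) = x is +1
        p_tumble = 0.05 if moving_up else 0.05 * sensitivity
        if random.random() < p_tumble:
            direction = -direction           # tumble: reverse direction
        x += 0.1 * direction                 # run one step
    return x
```

Varying `sensitivity` between agents mimics the between-cell variation in signalling protein concentration discussed above: insensitive cells diffuse, sensitive cells accumulate up-gradient.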
Abstract:
This paper presents a study on reduction of energy consumption in buildings through behaviour change informed by wireless monitoring systems for energy, environmental conditions and people positions. A key part of the Wi-Be system is the ability to accurately attribute energy usage behaviour to individuals, so they can be targeted with specific feedback tailored to their preferences. The use of wireless technologies for indoor positioning was investigated to ascertain the difficulties in deployment and potential benefits. The research to date has demonstrated the effectiveness of highly disaggregated personal-level data for developing insights into people’s energy behaviour and identifying significant energy saving opportunities (up to 77% in specific areas). Behavioural research addressed social issues such as privacy, which could affect the deployment of the system. Radio-frequency research into less intrusive technologies indicates that received-signal-strength-indicator-based systems should be able to detect the presence of a human body, though further work would be needed in both social and engineering areas.
Abstract:
This introduction to the Virtual Special Issue surveys the development of spatial housing economics from its roots in neo-classical theory, through more recent developments in social interactions modelling, and touching on the role of institutions, path dependence and economic history. The survey also points to some of the more promising future directions for the subject that are beginning to appear in the literature. The survey covers hedonic models, spatial econometrics, neighbourhood models, housing market areas, housing supply, models of segregation, migration, housing tenure, sub-national house price modelling including the so-called ripple effect, and agent-based models. Possible future directions are set in the context of a selection of recent papers that have appeared in Urban Studies. Nevertheless, there are still important gaps in the literature that merit further attention, arising at least partly from emerging policy problems. These include more research on housing and biodiversity, the relationship between housing and civil unrest, the effects of changing age distributions - notably housing for the elderly - and the impact of different international institutional structures. Methodologically, developments in Big Data provide an exciting framework for future work.
Abstract:
We study the relationship between the sentiment levels of Twitter users and the evolving network structure that the users created by @-mentioning each other. We use a large dataset of tweets to which we apply three sentiment scoring algorithms, including the open source SentiStrength program. Specifically we make three contributions. Firstly we find that people who have potentially the largest communication reach (according to a dynamic centrality measure) use sentiment differently than the average user: for example they use positive sentiment more often and negative sentiment less often. Secondly we find that when we follow structurally stable Twitter communities over a period of months, their sentiment levels are also stable, and sudden changes in community sentiment from one day to the next can in most cases be traced to external events affecting the community. Thirdly, based on our findings, we create and calibrate a simple agent-based model that is capable of reproducing measures of emotive response comparable to those obtained from our empirical dataset.
Abstract:
The widespread use of service-oriented architectures (SOAs) and Web services in commercial software requires development techniques that ensure the quality of Web services. Testing techniques and tools play a critical role in achieving quality in SOA-based systems, but existing techniques and tools for traditional systems are not appropriate for these new systems, so testing techniques and tools specific to Web services must be developed. This article presents new testing techniques that automatically generate test cases and test data for Web services. The techniques perturb the data in Web service messages with respect to data types, integrity and consistency. To support these techniques, a tool (GenAutoWS) was developed and applied to real problems. (C) 2010 Elsevier Inc. All rights reserved.
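Data perturbation of message fields can be sketched as follows. The field types, mutation values and message representation are illustrative assumptions only, since GenAutoWS itself is not specified here: for each field of a message, variants are emitted that stress data types and boundary or consistency assumptions.

```python
def perturb_message(message):
    """Generate perturbed variants of a web-service message (given here as
    a dict). Integers get boundary values and a type mutation; strings get
    empty, oversized, injection-style and null values."""
    variants = []
    for key, value in message.items():
        if isinstance(value, int):
            for bad in (0, -1, 2**31 - 1, -(2**31)):
                variants.append({**message, key: bad})
            variants.append({**message, key: str(value)})  # type mutation
        elif isinstance(value, str):
            for bad in ("", " " * 1024, "'; DROP TABLE users;--", None):
                variants.append({**message, key: bad})
    return variants
```

Each variant would then be serialised into the service's message format and sent, with the test oracle checking that the service rejects or survives the malformed input gracefully.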
Abstract:
The notion of knowledge artifact has rapidly gained popularity in the field of general knowledge management and, more recently, in knowledge-based systems. The main goal of this paper is to propose and discuss a methodology for the design and implementation of knowledge-based systems founded on knowledge artifacts. We advocate that systems built according to this methodology can effectively convey the flow of knowledge between different communities of practice. Our methodology has been developed from the ground up: we built several concrete systems based on the abstract notion of knowledge artifact and synthesized the methodology from reflections on our experience building them. In this paper, we also describe the most relevant systems we have built and how they guided us towards the synthesis of our proposed methodology. (C) 2008 Elsevier B.V. All rights reserved.